Simple and Asymmetric Graph Contrastive Learning without Augmentations
Graph Contrastive Learning (GCL) has shown superior performance for
representation learning on graph-structured data. Despite this success, most
existing GCL methods rely on prefabricated graph augmentations and homophily
assumptions, and thus fail to generalize well to heterophilic graphs, where
connected nodes may have different class labels and dissimilar features. In
this paper, we study the problem of conducting contrastive learning on
homophilic and heterophilic graphs. We find that we can achieve promising
performance simply by considering an asymmetric view of the neighboring nodes.
The resulting simple algorithm, Asymmetric Contrastive Learning for Graphs
(GraphACL), is easy to implement and does not rely on graph augmentations or
homophily assumptions. We provide theoretical and empirical evidence that
GraphACL can capture one-hop local neighborhood information and two-hop
monophily similarity, which are both important for modeling heterophilic
graphs. Experimental results show that the simple GraphACL significantly
outperforms state-of-the-art graph contrastive learning and self-supervised
learning methods on homophilic and heterophilic graphs. The code of GraphACL is
available at https://github.com/tengxiao1/GraphACL.
Comment: Accepted to NeurIPS 2023
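
The abstract names the core idea (contrast a node against its one-hop
neighbors through an asymmetric predictor, with no augmentations) but not its
form. Below is a minimal PyTorch sketch of such an asymmetric,
augmentation-free contrastive loss; the function name, the predictor
architecture, the temperature `tau`, and the use of all nodes as negatives
are illustrative assumptions, not the authors' exact implementation (see the
linked repository for that).

```python
import torch
import torch.nn.functional as F

def asymmetric_contrastive_loss(z, predictor, edge_index, tau=0.5):
    # z: (N, d) node embeddings from a single shared encoder (no augmented views).
    # predictor: a small MLP applied to one side only -- the asymmetry.
    # edge_index: (2, E) long tensor of graph edges.
    src, dst = edge_index
    p = F.normalize(predictor(z), dim=-1)  # predicted view of each node
    t = F.normalize(z, dim=-1)             # target view: plain embeddings
    logits = p[src] @ t.t() / tau          # (E, N): similarity to every node
    # For edge (u, v) the positive is the neighbor v; all other nodes serve
    # as negatives, so no graph augmentation is required.
    return F.cross_entropy(logits, dst)

# Hypothetical predictor for embeddings of width d:
# predictor = torch.nn.Sequential(
#     torch.nn.Linear(d, d), torch.nn.ReLU(), torch.nn.Linear(d, d))
```

Keeping the predictor on one side only is what makes the objective
asymmetric: a node must be able to predict its neighbors without the two
being forced to have identical embeddings, which is exactly what heterophilic
graphs require.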
A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability
Graph Neural Networks (GNNs) have developed rapidly in recent years. Owing
to their strong ability to model graph-structured data, GNNs are widely used
in various applications, including high-stakes scenarios such as financial
analysis, traffic prediction, and drug discovery. Despite their great
potential to benefit humans in the real world, recent studies show that GNNs
can leak private information, are vulnerable to adversarial attacks, can
inherit and magnify societal bias from training data, and lack
interpretability, all of which risk causing unintentional harm to users and
society. For example, existing works demonstrate that attackers can fool
GNNs into producing the outcomes they desire through unnoticeable
perturbations of the training graph, and GNNs trained on social networks may
embed discrimination in their decision process, strengthening undesirable
societal bias. Consequently, trustworthy GNNs in various aspects are
emerging to prevent harm from GNN models and increase users' trust in GNNs.
In this paper, we give a comprehensive
survey of GNNs in the computational aspects of privacy, robustness, fairness,
and explainability. For each aspect, we give a taxonomy of the related
methods and formulate general frameworks for the multiple categories of
trustworthy GNNs. We also discuss future research directions for each aspect
and the connections between these aspects to help achieve trustworthiness.
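
To make the attack scenario in the abstract concrete, here is a toy,
hypothetical sketch of a gradient-based edge-flip heuristic: perturb a
handful of adjacency entries in the direction that most increases the
training loss of a simple one-layer GCN. It illustrates the general idea
only, not any specific attack from the surveyed literature; all names
(`most_damaging_edge_flips`, `budget`) are invented for this example.

```python
import torch
import torch.nn.functional as F

def most_damaging_edge_flips(A, X, W, labels, budget=5):
    # A: (N, N) dense 0/1 adjacency, X: (N, d) features, W: (d, C) weights
    # of a toy one-layer GCN. Returns the `budget` edge flips whose loss
    # gradients suggest the largest increase in training loss.
    A = A.clone().float().requires_grad_(True)
    A_hat = A + torch.eye(A.size(0))          # add self-loops
    D_inv = torch.diag(A_hat.sum(1).pow(-1))  # simple row normalization
    logits = D_inv @ A_hat @ X @ W            # one-layer GCN forward pass
    F.cross_entropy(logits, labels).backward()
    # Adding edge (i, j) raises the loss when the gradient is positive;
    # removing an existing edge raises it when the gradient is negative.
    scores = A.grad * (1.0 - 2.0 * A.detach())
    idx = scores.flatten().topk(budget).indices
    return [(int(i) // A.size(0), int(i) % A.size(0)) for i in idx]
```

Real attacks in the survey's robustness taxonomy refine this basic idea with
constraints that keep the perturbation unnoticeable, which is what makes
such attacks hard to detect in practice.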